Process Consistency for AdaBoost
Author
Abstract
Recent experiments and theoretical studies show that AdaBoost can overfit in the limit of large time. If running the algorithm forever is suboptimal, a natural question is how low the prediction error can get during the process of AdaBoost. We show, under general regularity conditions, that the process of AdaBoost generates a consistent prediction, whose prediction error approaches the optimal Bayes error as the sample size increases. This result suggests that, while running the algorithm forever can be suboptimal, it is reasonable to expect that some regularization method via truncation of the process may lead to near-optimal performance.
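As a hedged illustration of what "regularization via truncation of the process" could look like in practice, the sketch below runs AdaBoost for many rounds and then keeps the round with the lowest held-out error. It is only a sketch of early stopping, not the procedure analyzed in the paper; the scikit-learn classes, the synthetic dataset, and all size parameters are illustrative assumptions.

```python
# Minimal early-stopping (truncation) sketch for AdaBoost, assuming scikit-learn.
# The dataset, split sizes, and number of rounds are illustrative, not from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Run the boosting process for many rounds.
clf = AdaBoostClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# staged_predict yields the prediction after each boosting round, so we can
# choose the truncation point that minimizes validation error.
val_errors = [np.mean(y_pred != y_val) for y_pred in clf.staged_predict(X_val)]
best_round = int(np.argmin(val_errors)) + 1
print(f"best truncation point: round {best_round}, "
      f"validation error {val_errors[best_round - 1]:.3f}")
```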
Similar References
AdaBoost is Consistent
The risk, or probability of error, of the classifier produced by the AdaBoost algorithm is investigated. In particular, we consider the stopping strategy to be used in AdaBoost to achieve universal consistency. We show that, provided AdaBoost is stopped after n^ν iterations (for sample size n and ν < 1), the sequence of risks of the classifiers it produces approaches the Bayes risk if the Bayes risk L∗ > 0.
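Stated as a display (a paraphrase of the abstract above, with t_n the number of boosting rounds, n the sample size, and L∗ the Bayes risk):

```latex
% Paraphrase of the stopping rule above: boost for t_n = n^nu rounds, nu < 1,
% and the risk of the stopped classifier approaches the Bayes risk L^*.
\[
  t_n = n^{\nu}, \quad 0 < \nu < 1
  \qquad\Longrightarrow\qquad
  L\!\left(\hat f_{t_n}\right) \longrightarrow L^{*}
  \quad \text{as } n \to \infty .
\]
```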
Consistency of Nearest Neighbor Methods
In this lecture we return to the study of consistency properties of learning algorithms, where we will be interested in the question of whether the generalization error of the function learned by an algorithm approaches the Bayes error in the limit of infinite data. In particular, we will consider consistency properties of the simple k-nearest neighbor (k-NN) classification algorithm (in the ne...
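For context, the canonical result of the type this lecture pursues (not quoted in the truncated excerpt above) is Stone's theorem for k-NN: consistency holds when the number of neighbors grows, but slowly relative to the sample size.

```latex
% Classical k-NN consistency conditions (Stone's theorem), added here for
% context; they do not appear verbatim in the excerpt above.
\[
  k_n \to \infty, \qquad \frac{k_n}{n} \to 0
  \qquad\Longrightarrow\qquad
  L\!\left(\hat f_{k_n\text{-NN}}\right) \longrightarrow L^{*}
  \quad \text{as } n \to \infty .
\]
```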
Bayes Risk Consistency of Large Margin Binary Classification Methods - A Survey
In this paper we survey the Bayes risk consistency of various large margin binary classifiers. Many classification algorithms minimize a tractable convex surrogate φ of the 0-1 loss function; for example, the SVM (Support Vector Machine) and AdaBoost minimize the hinge loss and the exponential loss, respectively. By imposing some sort of regularization conditions, it is possible to demonstrat...
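The two surrogates named in the excerpt, written as functions of the margin z = y f(x), together with the 0-1 loss they upper-bound:

```latex
% Hinge loss (SVM) and exponential loss (AdaBoost) as convex upper bounds on
% the 0-1 loss, all expressed in terms of the margin z = y f(x).
\[
  \varphi_{\mathrm{hinge}}(z) = \max(0,\, 1 - z),
  \qquad
  \varphi_{\exp}(z) = e^{-z},
  \qquad
  \mathbf{1}\{z \le 0\} \;\le\; \varphi(z).
\]
```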
Approximation Stability and Boosting
Stability has been explored to study the performance of learning algorithms in recent years, and it has been shown that stability is sufficient for generalization and is both sufficient and necessary for the consistency of ERM in the general learning setting. Previous studies showed that AdaBoost has almost-everywhere uniform stability if the base learner has L1 stability. The L1 stability, however, is t...
Discussion of Boosting Papers
We congratulate the authors for their interesting papers on boosting and related topics. Jiang deals with the asymptotic consistency of AdaBoost. Lugosi and Vayatis study the convex optimization of loss functions associated with boosting. Zhang studies the loss functions themselves. Their results imply that boosting-like methods can reasonably be expected to converge to Bayes classifiers under ...